Results 1 - 20 of 64
1.
IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR) ; : 855-856, 2022.
Article in English | Web of Science | ID: covidwho-1927532

ABSTRACT

One key factor in stopping the spread of COVID-19 is practicing social distancing. Visualizing the possible transmission routes of sneeze droplets in front of an infected person may be an effective way to help people understand the importance of social distancing. This paper presents a mobile virtual reality (VR) interface that helps people visualize droplet dispersion from the target person's point of view. We implemented a VR application to visualize and interact with sneeze simulation data immersively. Our application provides an easy way to communicate the correlation between social distance and exposure to infectious droplets, which is difficult to convey in the real world.

2.
IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR) ; : 414-415, 2022.
Article in English | Web of Science | ID: covidwho-1927531

ABSTRACT

When patients are on life support, their lives depend not only on the availability of medical devices but also on the staff's expertise in using them. Taking the example of ECMO devices, which were in high demand during the COVID-19 pandemic but rarely used before then, we developed a VR training for priming an ECMO machine to provide the required expertise in a standardized and simple way on a global scale. This paper presents the development of the VR training, informed by feedback from medical and technical experts.

3.
IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR) ; : 112-116, 2022.
Article in English | Web of Science | ID: covidwho-1927530

ABSTRACT

Care for seniors and adults with developmental disabilities (DD) has seen increased use and development of assistive technologies, including service robots. Such robots ease the challenges associated with care, companionship, medication intake, and fall prevention, among others. Research and development in this field rely on in-person data collection to ensure proper robot navigation, interaction, and service. However, the COVID-19 pandemic has led to physical distancing and access restrictions at long-term care facilities, making data collection very difficult. Traditional video-based data collection also poses challenges, as videos may not be representative of the population in terms of how people move, interact with the environment, or fall. In this paper, we present a VR simulator for robot navigation and fall detection that uses digital twins, allowing a virtual robot to be tested without access to the real physical location or to real people. The development process required building virtual sensors able to generate LiDAR data for the virtual robot to navigate and detect obstacles. Preliminary testing has yielded promising results for using the virtual simulator to train a service robot to navigate and detect falls. Our results include virtual maps, robot navigation, and fall detection.

4.
IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR) ; : 81-84, 2022.
Article in English | Web of Science | ID: covidwho-1927529

ABSTRACT

The COVID-19 pandemic created the largest disruption of education systems in history. Distance learning through online platforms was part of the solution. However, delivering preclinical exercises that train learners' psychomotor skills has been challenging. The use of virtual reality (VR) in training medical students is innovative and has attracted much attention. In this study, we present the development of a multi-user VR application for dental education. Our preliminary results show the potential of using VR in the preclinical curriculum of dental education.

5.
IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR) ; : 1-6, 2022.
Article in English | Web of Science | ID: covidwho-1927528

ABSTRACT

Cardboard-based virtual reality is an affordable way to experience virtual reality content. During the COVID-19 pandemic in particular, several studies used cardboard-based virtual reality remotely to minimize viral spread. We conducted a study in our research lab, providing a controlled setting, to explore the potential of low-cost virtual reality for participants' sense of presence and body ownership illusion. Our 2 (Avatar: realistic vs. mannequin self-avatar) x 2 (Breathing: breathing vs. no breathing motion) study investigated presence and body ownership when participants were instructed to passively observe a virtual environment through a cardboard-based virtual reality application while embodied as a self-avatar. Our results indicated that: (1) the mannequin self-avatar exerted a stronger effect on participants' presence; (2) younger participants who experienced the mannequin avatar reported stronger body ownership than older participants; and (3) while experiencing a mannequin avatar with no breathing motion, participants with prior VR experience reported a higher body ownership illusion than participants without prior VR experience. In this paper, we discuss our findings, as well as the study's limitations and future research directions.

6.
22nd IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) ; : 1544-1554, 2022.
Article in English | Web of Science | ID: covidwho-1916009

ABSTRACT

Despite significant progress in the past few years, machine learning systems are still often viewed as "black boxes" that lack the ability to explain their output decisions. In high-stakes situations such as healthcare, there is a need for explainable AI (XAI) tools that can help open up this black box. In contrast to approaches that largely tackle classification problems in the medical imaging domain, we address the less-studied problem of explainable image retrieval. We test our approach on a COVID-19 chest X-ray dataset and the ISIC 2017 skin lesion dataset, showing that saliency maps help reveal the image features models use to determine image similarity. We evaluated three saliency algorithms, which were occlusion-based, attention-based, or relied on a form of activation mapping. We also developed quantitative evaluation metrics that allow us to go beyond simple qualitative comparisons of the different saliency algorithms. Our results have the potential to aid clinicians when viewing medical images and address an urgent need for interventional tools in response to COVID-19. The source code is publicly available at: https://gitlab.kitware.com/brianhhu/x-mir.

7.
22nd IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) ; : 1215-1223, 2022.
Article in English | Web of Science | ID: covidwho-1916008

ABSTRACT

During the COVID-19 pandemic, face masks have become an essential part of daily life. Because a mask occludes the most prominent facial characteristics, it poses new challenges to existing facial recognition systems. This paper presents the idea of using forehead creases (under a surprised facial expression) as a new biometric modality to authenticate mask-wearing faces. Forehead biometrics uses as features the creases and textural skin patterns that appear due to voluntary contraction of the forehead region. The proposed framework is an efficient and generalizable deep learning framework for forehead recognition. Face-selfie images are collected using a smartphone's front camera in unconstrained, realistic indoor/outdoor environments. Acquired forehead images are first passed to a segmentation model that produces rectangular regions of interest (ROIs). A set of convolutional feature maps is subsequently obtained using a backbone network. The primary embeddings are enriched using a dual attention network (DANet) to induce discriminative feature learning. The attention-empowered embeddings are then optimized using Large Margin Cosine Loss (LMCL) followed by Focal Loss to update the weights, yielding robust training and better feature discrimination. Our system is end-to-end and few-shot; thus, it is very efficient in memory requirements and recognition rate. In addition, we present a forehead image dataset (the BITS-IITMandi-ForeheadCreases Images Database), recorded in two sessions from 247 subjects and containing a total of 4,964 selfie face-mask images. To the best of our knowledge, this is the first mobile-based forehead dataset to date, and it is being made available along with the mobile application in the public domain. The proposed system has achieved high performance in both closed-set matching (CRR: 99.08%, EER: 0.44%) and open-set matching (CRR: 97.84%, EER: 12.40%), which justifies the significance of the forehead as a biometric modality.

8.
22nd IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) ; : 1141-1150, 2022.
Article in English | Web of Science | ID: covidwho-1916007

ABSTRACT

In recent years, periocular recognition has developed into a valuable biometric identification approach, especially in wild environments (for example, faces masked due to the COVID-19 pandemic) where facial recognition may not be applicable. This paper presents a new deep periocular recognition framework called attribute-based deep periocular recognition (ADPR), which predicts soft biometrics and incorporates the predictions into a periocular recognition algorithm to determine identity from periocular images with high accuracy. We propose an end-to-end framework in which several shared convolutional neural network (CNN) layers (a common network) feed two separate dedicated branches (modality-dedicated layers): the first branch classifies periocular images, while the second predicts soft biometrics. The features from these two branches are then fused for final periocular recognition. The proposed method differs from existing methods in that it not only uses a shared CNN feature space to train the two tasks jointly, but also fuses the predicted soft biometric features with the periocular features during training to improve overall periocular recognition performance. Our model is extensively evaluated on four different publicly available datasets. Experimental results indicate that our soft-biometric-based periocular recognition approach outperforms other state-of-the-art methods for periocular recognition in wild environments.

9.
21st IEEE International Conference on Software Quality, Reliability and Security (QRS) ; : 192-195, 2021.
Article in English | Web of Science | ID: covidwho-1915994

ABSTRACT

Academic social network sites have become an important channel through which scholars obtain academic information. In the context of the COVID-19 pandemic, academic social question-and-answer (Q&A) platforms are a fast and efficient means of gathering the information needed to solve research problems relating to the study of COVID-19. The question, then, is how to provide scholars with high-quality answers. This research therefore focuses on the characteristics of high-quality answers to COVID-19 questions on academic social Q&A platforms in terms of answer content. By analyzing 6,791 answers to 349 questions about COVID-19 on ResearchGate Q&A, we found that high-quality academic answers on this topic tend to be rich in content, contain more negative emotion, use language fluently, and propose conjectures or hypotheses. This research helps improve the provision of satisfactory academic information services for scholars during public health emergencies.

10.
Workshop on Visual Analytics in Healthcare (VAHC) ; : 4-16, 2020.
Article in English | Web of Science | ID: covidwho-1868556

ABSTRACT

Since Fall 2019, the spread of the SARS-CoV-2 virus has changed everyday routines globally. Public health confinement measures have been taken to contain the pandemic, and an international effort has been made to model and predict its spatio-temporal evolution. A main question today is how to communicate complex multivariate, geospatial, and time-dependent information efficiently. A further challenge is communicating this information without bias or room for misinterpretation, and to the largest possible audience. This paper first identifies ergonomic criteria for efficient data visualization, and then presents several visualizations in a before/after fashion, showing how visualizations initially proposed by data scientists can be improved by applying ergonomic guidelines.

11.
Workshop on Visual Analytics in Healthcare (VAHC) ; : 1-3, 2020.
Article in English | Web of Science | ID: covidwho-1868555

ABSTRACT

To manage a localized outbreak or global pandemic like COVID-19, Public Health agencies (PH) and health systems utilize a variety of information systems. Although existing PH information systems enable capture of data on laboratory-confirmed cases of COVID-19, the current pandemic has illuminated several deficits in the existing U.S. information infrastructure, including gaps in access to and visualization of near-real-time (daily) impacts to the healthcare system. To address these gaps, we leveraged our state-wide health information exchange-derived dataset that represents nearly all healthcare facilities in Indiana. The resultant dashboard has evolved to present data on hospitalization, emergency department utilization, and other metrics of interest to PH and a broader constituency across the state.

12.
IEEE Workshop on Visual Analytics in Healthcare (VAHC) ; : 19-24, 2021.
Article in English | Web of Science | ID: covidwho-1868554

ABSTRACT

As the COVID-19 pandemic continues to impact the world, data is being gathered and analyzed to better understand the disease. Recognizing the potential for visual analytics technologies to support exploratory analysis and hypothesis generation from longitudinal clinical data, a team of collaborators worked to apply existing event-sequence visual analytics technologies to longitudinal clinical data from a cohort of 998 patients with high rates of COVID-19 infection. This paper describes the initial steps toward this goal, including: (1) the data transformation and processing work required to prepare the data for visual analysis, (2) initial findings and observations, and (3) qualitative feedback and lessons learned, which highlight key features as well as limitations to address in future work.

13.
IEEE Workshop on Visual Analytics in Healthcare (VAHC) ; : 1-5, 2021.
Article in English | Web of Science | ID: covidwho-1868553

ABSTRACT

The spread of the SARS-CoV-2 virus and its contagious disease, COVID-19, has impacted countries to an extent not seen since the 1918 flu pandemic. In the absence of an effective vaccine, and as cases surged worldwide, governments were forced to adopt measures to inhibit the spread of the disease. To reduce its impact and to guide policy planning and resource allocation, researchers have been developing models to forecast the infectious disease. Ensemble models, which aggregate forecasts from multiple individual models, have been shown to be a useful forecasting method. However, these models can still provide less-than-adequate forecasts at higher spatial resolutions. In this paper, we present COVID-19 EnsembleVis, a web-based interactive visual interface that supports assessing the errors of ensembles and individual models by enabling users to effortlessly navigate through and compare model outputs across their space and time dimensions. COVID-19 EnsembleVis enables a more detailed understanding of uncertainty and of the range of forecasts generated by individual models.

14.
23rd IEEE International Symposium on Multimedia (ISM) ; : 204-205, 2021.
Article in English | Web of Science | ID: covidwho-1868546

ABSTRACT

With the increasing popularity of e-commerce and the COVID-19 pandemic, a growing number of shoppers are opting for online shopping because of store closures and fear of contracting the coronavirus in public. While conventional retail provides consumers with a full spectrum of interaction, online shopping has lacked these types of experiences. Virtual reality technology can therefore be used to bridge the gap between the two shopping modes and create a more natural and intuitive shopping environment. The aim of this research is to investigate how consumers' interactions in a virtual store environment affect their shopping experience. We designed a virtual supermarket with a social interaction facility and conducted an experiment to track the effect of social interaction on consumers using different metrics. The results showed that consumers' social interaction in the form of avatar-mediated communication has a beneficial impact on their social presence. They also demonstrate that, through social interaction among avatars, consumers felt more immersed in and socially engaged with the shopping environment.

15.
37th IEEE International Conference on Software Maintenance and Evolution (ICSME) ; : 515-524, 2021.
Article in English | Web of Science | ID: covidwho-1853453

ABSTRACT

One factor in the success of software development companies is their ability to deliver good-quality products fast. For this, they need to improve their software development practices. We worked with a medium-sized company modernizing its development practices. The company introduced several practices recommended in agile development. While the benefits of these practices are well documented, their impact on developers is less well known. We followed this modernization before and during the COVID-19 outbreak. This paper presents an empirical study of the perceived benefits and drawbacks of these practices, as well as the impact of COVID-19 on the company's employees. One conclusion is that obsolete technologies create additional difficulties when adapting both the technology itself and the development practices it encourages to modern standards.

16.
36th IEEE/ACM International Conference on Automated Software Engineering (ASE) ; : 227-231, 2021.
Article in English | Web of Science | ID: covidwho-1816432

ABSTRACT

Context. Applying sentiment analysis is in general a laborious task. Furthermore, if we add the task of obtaining a good-quality dataset with a balanced distribution and enough samples, the job becomes more complicated. Objective. We want to find out whether merging compatible datasets improves emotion analysis based on machine learning (ML) techniques, compared to the original, individual datasets. Method. We obtained two datasets of COVID-19-related tweets written in Spanish and built two new datasets from them, combining the originals with different degrees of balance. We analyzed the results in terms of precision, recall, F1-score, and accuracy. Results. The results show that merging two datasets can improve the performance of ML models, particularly the F1-score, when the merging process follows a strategy that optimizes the balance of the resulting dataset. Conclusions. Merging two datasets can improve the performance of ML models for emotion analysis while saving resources for labeling training data. This may be especially useful for software engineering activities that leverage ML-based emotion analysis techniques.

17.
20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR) ; : 57-62, 2021.
Article in English | Web of Science | ID: covidwho-1746051

ABSTRACT

Eye gaze plays an essential role in interpersonal communication. Its role in face-to-face interactions, including those in virtual environments (VEs), has been extensively explored; however, the neural correlates of eye gaze in interpersonal communication have not been studied exhaustively. This paper explores the neural correlates of eye gaze between two individuals interacting in a VE. The choice of a VE is motivated by the increasing frequency with which we use desktop or head-mounted display (HMD) based VEs to interact with each other; the onset of the COVID-19 pandemic has accelerated the pace at which these technologies are being adopted for remote collaboration. The pilot study described in this paper explores the effects of eye gaze on face-to-face interaction in a VE using hyperscanning, a technique for measuring neural activity to determine empirically whether participants display neural synchrony. Our results demonstrate that eye gaze direction appears to play a significant role in determining whether interacting individuals exhibit inter-brain synchrony. Results from this study could contribute to positive outcomes for individuals with mental health disorders: we believe the techniques described here can help extend high-quality mental health care to individuals regardless of their geographical location.

18.
20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR) ; : 87-91, 2021.
Article in English | Web of Science | ID: covidwho-1746050

ABSTRACT

Although Augmented Reality (AR) can easily be implemented on most smartphones and tablets today, the investigation of distance perception with these types of devices has been limited. In this paper, we ask whether the distance of a virtual human (avatar) seen through a smartphone or tablet display is perceived accurately. Due to the COVID-19 pandemic and increased sensitivity to distances to others, we also investigate whether a coughing avatar, with or without a mask, affects distance estimates compared to a static avatar. We performed an experiment in which all participants estimated the distances to avatars that were either static or coughing, with and without masks. Avatars were placed at a range of distances typical for interaction, i.e., action space. Data on judgments of distance to the varying avatars was collected in a distributed manner by deploying a smartphone app. Results showed that participants were fairly accurate in estimating the distance to all avatars, regardless of coughing or mask condition. These findings suggest that mobile AR applications can be used to obtain accurate estimates of distances to virtual others "in the wild," which is promising for simulation and training applications that require precise distance estimates.

19.
20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR) ; : 346-351, 2021.
Article in English | Web of Science | ID: covidwho-1746049

ABSTRACT

For isolated patients, such as COVID-19 patients in an intensive care unit, conventional video tools can provide a degree of visual telepresence. However, video alone offers, at best, an approximation of a "through a window" metaphor: remote visitors, such as loved ones, cannot touch the patient to provide reassurance. Here, we present preliminary work aimed at providing an isolated patient and remote visitors with audiovisual interactions augmented by mediated social touch: the perception of being touched for the isolated patient, and the perception of touching for the remote visitor. We developed a tactile telepresence system prototype that provides a remote visitor with a tablet-based touch-video interface for conveying touch patterns on the forehead of an isolated patient. The isolated patient can see the remote visitor, see themselves with the touch patterns indicated on their forehead, and feel the touch patterns through a vibrotactile headband interface. We motivate the work, describe the system prototype, and present results from pilot studies investigating the technical feasibility of the system, along with the social and emotional effects of using the prototype.

20.
20th IEEE International Symposium on Mixed and Augmented Reality (ISMAR) ; : 415-420, 2021.
Article in English | Web of Science | ID: covidwho-1746048

ABSTRACT

Since the emergence of COVID-19 in late 2019, there has been a significant disturbance in human-to-human interaction that has changed the way we conduct user studies in the field of Human-Computer Interaction (HCI), especially for extended (augmented, mixed, and virtual) reality (XR). To uncover how XR research has adapted throughout the pandemic, this paper presents a review of user study methodology adaptations from a corpus of 951 papers. The corpus covers submissions to CORE 2021 A* conferences (IEEE ISMAR, ACM CHI, IEEE VR) published from Q2 2020 through Q1 2021. The review highlights how methodologies were changed and reported, sparking discussion about how methods should be conveyed and to what extent research should be contextualised, by drawing on external topical factors such as COVID-19, to maximise usefulness and perspective for future studies. We provide a set of initial guidelines based on our findings, posing key considerations for researchers reporting on user studies during uncertain and unprecedented times.
